

Why giving AI 'human ethics' is probably a terrible idea

#artificialintelligence

If you want artificial intelligence to have human ethics, you have to teach it to evolve ethics the way we do. At least, that's what a pair of researchers from the International Institute of Information Technology in Bangalore, India, proposed in a pre-print paper published today. Titled "AI and the Sense of Self," the paper describes a methodology called "elastic identity" by which, the researchers say, AI might learn to gain a greater sense of agency while simultaneously understanding how to avoid "collateral damage." In short, the researchers are suggesting that we teach AI to be more ethically aligned with humans by allowing it to learn when it's appropriate to optimize for itself and when it's necessary to optimize for the good of a community. As the authors put it: "While we may be far from a comprehensive computational model of self, in this work, we focus on a specific characteristic of our sense of self that may hold the key for the innate sense of responsibility and ethics in humans."


Done right, human ethics can ensure AI bias is curbed - Tech Wire Asia

#artificialintelligence

AI bias remains a prevailing problem when it comes to ensuring the proper implementation of artificial intelligence (AI) across many industries. Since the technology has been deployed across several verticals, some of its use cases have caused unpleasantness among users. One of the biggest worries surrounding the sticky issue of AI bias is facial recognition. Because AI works purely by analyzing the data inputs it has access to, its algorithms may at times not produce entirely accurate results. In one facial recognition case, the AI labeled people of certain races as criminals, causing a public uproar.


Applying Human Ethics to Robots

#artificialintelligence

Robots are increasingly becoming common in everyday life. From robots that assist in putting out fires to robots that help the elderly, it seems that robots are here to stay and, more importantly, here to help humanity. But how do you ensure that robots only help humanity? What ethics should robots abide by? And what do you do about potentially lethal robots, those meant to be used in war?


Teaching Machines About Human Ethics

#artificialintelligence

Advancement in artificial intelligence is picking up pace substantially, bringing humans into an era where decision-making will be at least machine-consulted, if not machine-governed. Because these intelligent machines or agents do not have the same emotions and experiences as humans, their suggestions or outputs will more likely be calculated decisions, which are sometimes not appropriate from a human standpoint. At this stage, it is essential that such intelligent agents be programmed so that their suggestions or outputs coincide with human ethics and traditions.


Can Artificial Intelligence be balanced by Human Ethics? (via Techonomy) - The Futures Agency

#artificialintelligence

Interesting write-up by Jennifer L. Schenker, published at the end of January on Techonomy. "We should not let Silicon Valley be the mission control for humanity," argues futurist Gerd Leonhard, author of a new book called Technology vs. Humanity: The Coming Clash Between Man and Machine. If autonomous AI software, crunching data far more rapidly than humans can, helps eradicate disease and poverty and introduces societal improvements and efficiencies, then we must embrace it, Leonhard says. But "at the same time we have to have governance. And right now there is no such thing."